Making Others Believe What They Want
Authors

Guido Boella, Università di Torino, Dipartimento di Informatica, Cso Svizzera 185, 10149 Torino, Italy, e-mail: [email protected]
Célia da Costa Pereira and Andrea G. B. Tettamanzi, Università degli Studi di Milano, Dip. Tecnologie dell'Informazione, Via Bramante 65, I-26013 Crema (CR), Italy, e-mail: {pereira,tettamanzi}@dti.unimi.it
Leendert van der Torre, Université du Luxembourg, Computer Science and Communication, rue Richard Coudenhove-Kalergi 6, L-1359 Luxembourg, Luxembourg, e-mail: [email protected]

Abstract
We study the interplay between argumentation and belief revision within the multi-agent systems (MAS) framework. When an agent uses an argument to persuade another one, he must consider not only the proposition supported by the argument, but also the overall impact of the argument on the beliefs of the addressee. Different arguments lead to different belief revisions by the addressee. We propose an approach whereby the best argument is defined as the one which is both rational and the most appealing to the addressee.

1 A motivating example

Galbraith [5] put forward examples of public communication where speakers have to address a politically oriented audience. He noticed how difficult it is to propose to such an audience views which contrast with their goals, values, and what they already know. Speaker S, a financial advisor, has to persuade addressee R, an investor, who desires to invest a certain amount of money (im). S has two alternative arguments in support of a proposition wd ("The dollar is weak") he wants R to believe, one based on bt → wd and one on hb → wd:

1. "The dollar is weak (wd) since the balance of trade is negative (bt), due to high import (hi)" (a = 〈{bt → wd, hi → bt, hi}, wd〉);
2. "The dollar is weak (wd) due to the housing bubble (hb) created by the excess of subprime mortgages (sm)" (d = 〈{hb → wd, sm → hb, sm}, wd〉).

To the reply of R, "There is no excess of subprime mortgages (sm) since the banks are responsible (rb)" (e = 〈{rb → ¬sm, rb}, ¬sm〉), S counters that "The banks are not responsible (rb), as the Enron case shows (ec)" (f = 〈{ec → ¬rb, ec}, ¬rb〉). Assume that both agents consider a supported proposition stronger than an unsupported one (e.g., ec → ¬rb prevails over rb alone).

Although, from a logical point of view, both arguments make the case for wd, they are very different if we consider other dimensions concerning the addressee R. For example, even if R could accept wd, other parts of the arguments have different impacts. Accepting an argument implies not only believing wd, but also believing the whole argument from which wd follows (unless we allow an irrational agent which accepts the conclusion of an argument but not the reasons supporting it). This means that R undergoes a phase of belief revision to accept the support of the argument, resulting in a new view of the world. Before dropping his previous view of the world and adopting the new one, he has to compare them.

• The state of the world resulting from the revision is less promising from the point of view of the possibility for R of reaching his goals. E.g., if the banks are not responsible, it is difficult to achieve his goal of investing money (im).
• The state of the world resulting from the revision contrasts with his values. E.g., he has a subprime mortgage, and he does not like a world where subprime mortgages are risky due to their excess.
• He has never heard about hb → wd; even though he trusts S, this is new information for him.
Thus, R is probably inclined to accept the first argument, which does not interact with his previous goals and beliefs, rather than the second one, which, above all, depicts a scenario that is less promising for his hopes of making money by investing. A smart advisor, who is able to figure out the profile of the investor, will therefore resort to the first argument rather than to the second one.

Even if this kind of evaluation by R in deciding what to believe can lead to partially irrational decisions, this is what happens with humans: both economists like Galbraith and cognitive scientists like Castelfranchi [8] support this view. Thus, S should take advantage of this mechanism of reasoning. In particular, an agent could pretend to have accepted an argument at the public level, since he cannot reply to the persuader any more and he does not want to appear irrational. However, privately, and in particular when the time comes to make a decision, he will stick to his previous beliefs.

These phenomena must be studied if we want to build agents which are able to interact with humans, or believable agents; if we want to use agent models as formal models for phenomena which are studied informally in other fields like economics, sociology, and cognitive science; and, moreover, if we want to prevent our agents from being cheated by other agents which exploit mechanisms like the one described here.

2 Argumentation Theory

We adopt a simple framework for argumentation along the lines of Dung's original proposal [4], instantiating the notion of argument as an explanation-based argument. Given a set of formulas L, an argument over L is a pair A = 〈H, h〉 such that H ⊆ L, H is consistent, H ⊢ h, and H is minimal (for set inclusion) among the sets satisfying the former three conditions. On the set of arguments Arg, a priority relation ⪰ is defined, A1 ⪰ A2 meaning that A1 has priority over A2.

Let A1 = 〈H1, h1〉 and A2 = 〈H2, h2〉 be two arguments. A1 undercuts A2 if there exists h ∈ H2 such that h1 ≡ ¬h. A1 rebuts A2 if h1 ≡ ¬h2 (note that rebutting is symmetric). Finally, A1 attacks A2 if (i) A1 rebuts or undercuts A2 and (ii) whenever A2 rebuts or undercuts A1 in turn, A2 does not have priority over A1.

The semantics of Dung's argumentation framework is based on the two notions of defence and conflict-freeness.

Definition 1. A set of arguments S defends an argument A iff, for each argument B ∈ Arg such that B attacks A, there exists an argument C ∈ S such that C attacks B.

Definition 2. A set of arguments S is conflict-free iff there are no A, B ∈ S such that A attacks B.

The following definition summarizes various semantics of acceptable arguments proposed in the literature. The output of the argumentation framework is derived from the set of acceptable arguments, which are selected with respect to an acceptability semantics.

Definition 3. Let S ⊆ Arg.
• S is admissible iff it is conflict-free and defends all its elements.
• A conflict-free S is a complete extension iff S = {A | S defends A}.
• S is a grounded extension iff it is the smallest (for set inclusion) complete extension.
• S is a preferred extension iff it is a maximal (for set inclusion) complete extension.
• S is a stable extension iff it is a preferred extension that attacks all arguments in Arg \ S.

In this paper we use the unique grounded extension, written E(Arg, ⪰). Many properties of, and relations among, these semantics have been studied by Dung and others.
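For illustration, the grounded extension can be computed by iterating the defence operator of Definition 1 from the empty set up to its least fixpoint. The following Python sketch is not part of the paper: the pair-based encoding of arguments, the representation of negation, and the encoding of the priority relation as a set of (stronger, weaker) pairs are assumptions made only to keep the example self-contained.

# Illustrative sketch (not from the paper): arguments are pairs (support, conclusion),
# with support a frozenset of formulas; negation is encoded as ("not", p);
# `prior` is a set of (stronger, weaker) argument pairs.

def undercuts(a1, a2):
    """A1 undercuts A2 if A1's conclusion negates some formula in A2's support."""
    (_, h1), (h2_support, _) = a1, a2
    return any(h1 == ("not", f) or f == ("not", h1) for f in h2_support)

def rebuts(a1, a2):
    """A1 rebuts A2 if their conclusions are contradictory (a symmetric relation)."""
    h1, h2 = a1[1], a2[1]
    return h1 == ("not", h2) or h2 == ("not", h1)

def attacks(a1, a2, prior):
    """A1 attacks A2 if it rebuts or undercuts A2 and, in case of a counter-attack,
    A2 does not have priority over A1."""
    if not (rebuts(a1, a2) or undercuts(a1, a2)):
        return False
    if rebuts(a2, a1) or undercuts(a2, a1):
        return (a2, a1) not in prior
    return True

def grounded_extension(args, prior):
    """Iterate the defence operator F(S) = {A | S defends A} from the empty set
    until its least fixpoint, i.e., the grounded extension."""
    extension = set()
    while True:
        defended = {a for a in args
                    if all(any(attacks(c, b, prior) for c in extension)
                           for b in args if attacks(b, a, prior))}
        if defended == extension:
            return extension
        extension = defended

# A fragment of the running example: f undercuts e and has priority over it,
# so the grounded extension of {e, f} contains only f.
e = (frozenset(["rb -> not sm", "rb"]), ("not", "sm"))
f = (frozenset(["ec -> not rb", "ec"]), ("not", "rb"))
assert grounded_extension({e, f}, prior={(f, e)}) == {f}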
Example 1. The example of Section 1 can be formalized as follows in terms of arguments:

a = 〈{bt → wd, hi → bt, hi}, wd〉,
b = 〈{eg → ¬hi, eg}, ¬hi〉,
c = 〈{de → ¬eg, de}, ¬eg〉, with c ⪰ b,
d = 〈{hb → wd, sm → hb, sm}, wd〉,
e = 〈{rb → ¬sm, rb}, ¬sm〉,
f = 〈{ec → ¬rb, ec}, ¬rb〉, with f ⪰ e.

Here b undercuts a, c undercuts b, e undercuts d, and f undercuts e. With Arg = {a, b, c, d, e, f}, the grounded extension is E(Arg, ⪰) = {a, c, d, f}.

3 Arguments and Belief Revision

Belief revision is the process of changing beliefs to take a new piece of information into account. Traditionally, the beliefs are modelled as propositions and the new piece of information is a proposition as well. In our model, instead, the belief base is made of arguments, and the new information is an argument too. The argumentative belief revision operator ∗ is defined as the addition of the new argument to the base as the argument with the highest priority. Given an argument A = 〈H, h〉, a base of arguments Q and a priority relation ⪰Q over Q:

〈Q, ⪰Q〉 ∗ A = 〈Q ∪ {A}, ⪰(Q,{A})〉,   (1)

where ⪰Q ⊂ ⪰(Q,{A}) and ∀A′ ∈ Q, A ≻(Q,{A}) A′. The new belief set can be derived from the new extension E(Q ∪ {A}, ⪰(Q,{A})) as the set of conclusions of its arguments:

B(Q ∪ {A}, ⪰(Q,{A})) = {h | ∃〈H, h〉 ∈ E(Q ∪ {A}, ⪰(Q,{A}))}.   (2)

Note that, given this definition, there is no guarantee that the conclusion h of argument A is in the belief set; indeed, even if A is now the argument with the highest priority, the argument set Q could contain some argument A′ which attacks A. An argument A′ = 〈H′, h′〉 which merely rebuts A (i.e., h′ ≡ ¬h) would not be able to attack A, since A has priority over A′ by definition of the revision. If, instead, A′ undercuts A, it is possible that A does not undercut or rebut A′ in turn; A′ then attacks A, possibly putting A outside the extension if no argument defends it against A′. Success can be ensured only if the argument A is supported by a set of arguments S with a priority relation ⪰S which, once added to Q, can defend A in Q and defend themselves too. Thus, it is necessary to extend the definition above to sets of arguments, to allow an argument to be defended:

〈Q, ⪰Q〉 ∗ 〈S, ⪰S〉 = 〈Q ∪ S, ⪰(Q,S)〉,   (3)

where the relative priority among the arguments in S is preserved and they have priority over the arguments in Q: ⪰Q ⊂ ⪰(Q,S); ∀A′, A′′ ∈ S, A′ ≻(Q,S) A′′ iff A′ ≻S A′′; and ∀A ∈ S, ∀A′ ∈ Q, A ≻(Q,S) A′.

Example 2. Q = {e}, S = {d, f}, with d ⪰S f and f ⪰S d. Then 〈Q, ⪰Q〉 ∗ 〈S, ⪰S〉 = 〈Q ∪ S, ⪰(Q,S)〉, with d ≻(Q,S) e, f ≻(Q,S) e, d ⪰(Q,S) f, f ⪰(Q,S) d, and E(Q ∪ S, ⪰(Q,S)) = {d, f}, B(E({d, e, f}, ⪰(Q,S))) = {wd, sm}.
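Building on the grounded_extension sketch above, the revision operator ∗ of equations (1) and (3) and the belief set of equation (2) can be illustrated as follows; as before, the explicit-pairs encoding of the priority relation is our own assumption rather than notation from the paper.

# Sketch of the argumentative revision operator * (eqs. (1) and (3)): the new
# arguments keep their mutual priorities and gain priority over every old argument.

def revise(base, base_prior, new_args, new_prior):
    """Return the revised pair corresponding to 〈Q ∪ S, ⪰(Q,S)〉."""
    revised = set(base) | set(new_args)
    prior = set(base_prior) | set(new_prior)
    prior |= {(a, b) for a in new_args for b in base}  # new arguments dominate old ones
    return revised, prior

def belief_set(args, prior):
    """Belief set of eq. (2): the conclusions of the arguments in the grounded extension."""
    return {conclusion for (_support, conclusion) in grounded_extension(args, prior)}

Applied to Example 2, revising the base {e} with {d, f} puts d and f into the revised extension, as reported there; as noted above, however, success is not guaranteed in general, since an argument already in the base may undercut the newcomer without being attacked in return.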
4 An Abstract Agent Model

The basic components of our language are beliefs and desires. Beliefs are represented by means of an argument base. A belief set is a finite and consistent set of propositional formulas describing the information the agent has about the world and internal information. Desires are represented by means of a desire set. A desire set consists of a set of propositional formulas which represent the situations the agent would like to achieve. However, unlike the belief set, a desire set may be inconsistent, e.g., {p, ¬p}. Let L be a propositional language.

Definition 4. The agent's desire set is a possibly inconsistent finite set of sentences, denoted by D, with D ⊆ L.

Goals, in contrast to desires, are represented by consistent desire sets. We assume that an agent is equipped with two components:

• an argument base 〈Arg, ⪰Arg〉, where Arg is a set of arguments and ⪰Arg is a priority ordering on arguments;
• a desire set D ⊆ L.

The mental state of an agent is described by a pair Σ = 〈〈Arg, ⪰Arg〉, D〉. In addition, we assume that each agent is provided with a goal selection function G and a belief revision operator ∗, as discussed below.

Definition 5. We define the belief set B of an agent, i.e., the set of all propositions in L the agent believes, in terms of the extension of its argument base 〈Arg, ⪰Arg〉: B = B(Arg, ⪰Arg) = {h | ∃〈H, h〉 ∈ E(Arg, ⪰Arg)}.

We will denote by ΣS, ArgS, E(ArgS, ⪰ArgS) and BS, respectively, the mental state, the argument base, the extension of the argument base, and the belief set of an agent S.

In general, given a problem, not all goals are achievable, i.e., it is not always possible to construct a plan for each goal. The goals which are not achievable, or which are not chosen to be achieved, are called violated goals. Hence, we assume a problem-dependent function V that, given a belief set B and a goal set D′ ⊆ D, returns a couple 〈Dᵃ, Dᵛ〉, where Dᵃ is a maximal subset of achievable goals and Dᵛ = D′ \ Dᵃ is the subset of violated goals. Intuitively, by considering violated goals we can take into account, when comparing candidate goal sets, what we lose by not achieving goals.

In order to act, an agent has to take a decision among the different sets of goals he can achieve. The aim of this section is to illustrate a qualitative method for goal comparison in the agent theory; more precisely, we define a qualitative way in which an agent can choose among different sets of candidate goals. Indeed, from a desire set D, several candidate goal sets Di, 1 ≤ i ≤ n, may be derived. How can an agent choose among all the possible Di? It is unrealistic to assume that all goals have the same priority. We use the notion of preference (or urgency) of desires to represent how relevant each goal is for the agent, depending, for instance, on the reward for achieving it. The idea is that an agent should choose a set of candidate goals which contains the greatest number of achievable goals (or the least number of violated goals).

We assume we have at our disposal a total pre-order ⪰ over an agent's desires, where φ ⪰ ψ means that desire φ is at least as preferred as desire ψ. The ⪰ relation can be extended from individual goals to sets of goals: a goal set D1 is preferred to another one, D2, if, considering only the goals occurring in either set, the most preferred goals are in D1. Note that ⪰ is connected and therefore a total pre-order, i.e., we always have D1 ⪰ D2 or D2 ⪰ D1 (or both).

Definition 6. Goal set D1 is at least as important as goal set D2, denoted D1 ⪰ D2, iff the list of desires in D1 sorted by decreasing preference is lexicographically greater than or equal to the list of desires in D2 sorted by decreasing preference. If D1 ⪰ D2 and D2 ⪰ D1, D1 and D2 are said to be indifferent, denoted D1 ∼ D2.

However, we also need to be able to compare the mutually exclusive subsets (achievable and violated goals) of each candidate goal set, as defined below. We propose two methods to compare couples of goal sets.

According to the ⪰D criterion, a couple of goal sets 〈D1ᵃ, D1ᵛ〉 is at least as preferred as the couple 〈D2ᵃ, D2ᵛ〉, written 〈D1ᵃ, D1ᵛ〉 ⪰D 〈D2ᵃ, D2ᵛ〉, iff D1ᵃ ⪰ D2ᵃ and D1ᵛ ⪯ D2ᵛ. ⪰D is reflexive and transitive, but partial. 〈D1ᵃ, D1ᵛ〉 is strictly preferred to 〈D2ᵃ, D2ᵛ〉 in two cases:

1. D1ᵃ ⪰ D2ᵃ and D1ᵛ ≺ D2ᵛ, or
2. D1ᵃ ≻ D2ᵃ and D1ᵛ ⪯ D2ᵛ.

The two couples are indifferent when D1ᵃ = D2ᵃ and D1ᵛ = D2ᵛ. In all the other cases, they are not comparable.
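As an illustration of Definition 6 and of the ⪰D criterion, the sketch below compares goal sets through their preference-sorted lists and compares couples of achievable/violated goal sets componentwise. The numeric rank encoding of the desire pre-order (higher rank means more preferred) is an assumption made only for the sake of the example.

# Sketch of Definition 6 and of the ⪰D criterion over couples 〈Da, Dv〉.
# rank[φ] is an assumed numeric encoding of the desire pre-order.

def pref_key(goals, rank):
    """Ranks of the goals, in decreasing order of preference (Definition 6)."""
    return sorted((rank[g] for g in goals), reverse=True)

def at_least_as_important(d1, d2, rank):
    """D1 ⪰ D2 iff D1's sorted preference list is lexicographically ≥ D2's."""
    return pref_key(d1, rank) >= pref_key(d2, rank)

def pref_D(couple1, couple2, rank):
    """〈D1a, D1v〉 ⪰D 〈D2a, D2v〉 iff D1a ⪰ D2a and D1v ⪯ D2v (a partial relation)."""
    (d1a, d1v), (d2a, d2v) = couple1, couple2
    return (at_least_as_important(d1a, d2a, rank)
            and at_least_as_important(d2v, d1v, rank))

# For instance, with rank = {"im": 2, "p": 1}, a couple that achieves im is
# preferred to one that violates it:
rank = {"im": 2, "p": 1}
assert pref_D(({"im"}, set()), (set(), {"im"}), rank)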
According to the ⪰Lex criterion, a couple of goal sets 〈D1ᵃ, D1ᵛ〉 is at least as preferred as the couple 〈D2ᵃ, D2ᵛ〉, written 〈D1ᵃ, D1ᵛ〉 ⪰Lex 〈D2ᵃ, D2ᵛ〉, iff either D1ᵃ ∼ D2ᵃ and D1ᵛ ∼ D2ᵛ, or there exists a φ ∈ L such that both the following conditions hold:

1. ∀φ′ ≻ φ, the two couples are indifferent, i.e., one of the following possibilities holds: (a) φ′ ∈ D1ᵃ ∩ D2ᵃ; (b) φ′ ∉ D1ᵃ ∪ D1ᵛ and φ′ ∉ D2ᵃ ∪ D2ᵛ; (c) φ′ ∈ D1ᵛ ∩ D2ᵛ;
2. either φ ∈ D1ᵃ \ D2ᵃ or φ ∈ D2ᵛ \ D1ᵛ.

⪰Lex is reflexive, transitive, and total.

In general, given a set of desires D, there may be many possible candidate goal sets. An agent in state Σ = 〈〈Arg, ⪰Arg〉, D〉 must select precisely one of the most preferred couples of achievable and violated goals. Let us call G the function which maps a state Σ into the couple 〈Dᵃ, Dᵛ〉 of goal sets selected by an agent in state Σ. G is such that, for every Σ, if 〈D̄ᵃ, D̄ᵛ〉 is a couple of goal sets, then G(Σ) ⪰ 〈D̄ᵃ, D̄ᵛ〉, i.e., a rational agent always selects one of the most preferred couples of candidate goal sets [3].

5 An Abstract Model of Speaker-Receiver Interaction

Using the above agent model, we consider two agents: S, the speaker, and R, the receiver. S wants to convince R of some proposition p. How does agent S construct a set of arguments S? Of course, S could include all the arguments in its base, but in this case it would risk making its argumentation less appealing and thus leading R to refuse to revise its beliefs, as discussed in the next section. Thus, we require that the set of arguments S to be communicated to R be minimal: even if there are alternative arguments for p, only one is included. We also require that S be built using arguments which are not already believed by R. S is a minimal set among the sets T defined as follows:

T ⊆ ArgS ∧ B(〈ArgR, ⪰ArgR〉 ∗ 〈T, ⪰T〉) ⊢ p.   (4)

Example 3. S = {a, c}, p = wd, ArgR = {b}. Then E(ArgR ∪ S, ⪰(ArgR,S)) = {a, c} and B(E(ArgR ∪ S, ⪰(ArgR,S))) = {wd, ¬eg}.

This definition has two shortcomings. First, such an S may not exist, since there may be no T satisfying (4); there is no reasonable way of ensuring that S can always convince R, because, as discussed in Section 3, success cannot be assumed. Second, in some cases arguments in E(ArgR ∪ S, ⪰(ArgR,S)) may be among the ones believed by R but not by S. If they contribute to proving p, there would be a problem:

∃A ∈ ArgR \ ArgS such that B(E((ArgR \ {A}) ∪ S, ⪰(ArgR,S))) ⊬ p.

This would qualify S as a not entirely sincere agent, since he would rely (even without communicating them explicitly) on some arguments he does not believe, which are used in the construction of the extension from which p is proved. The second problem, instead, can be solved by restricting the set S so that it does not need arguments not believed by S in order to be defended. S is now a minimal T such that T ⊆ ArgS, B(〈ArgR, ⪰ArgR〉 ∗ 〈T, ⪰T〉) ⊢ p, and

¬∃A ∈ ArgR \ ArgS such that B(〈ArgR \ {A}, ⪰ArgR〉 ∗ 〈T, ⪰T〉) ⊬ p.

Example 4. ArgS = {a, c, i}, ArgR = {b, g, h}, where g undercuts c, h attacks g, and i attacks g. If S = {a, c} and p = wd: E(ArgR ∪ S, ⪰(ArgR,S)) = {a, c, h} and B({a, b, c, g, h}) = {wd, ¬eg, . . .}. If S = {a, c, i} and p = wd: E(ArgR ∪ S, ⪰(ArgR,S)) = {a, c, i} and B({a, b, c, g, h, i}) = {wd, ¬eg, . . .}.

The belief revision system based on argumentation (see Section 3) is used to revise the public face of agents: agents want to appear rational (otherwise they lose their status, reliability, trust, etc.) and thus, when facing an acceptable argument, i.e., one they do not know what to reply to, they have to admit that they believe it and to revise the beliefs which are inconsistent with it.
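By way of illustration, the choice of a persuading set satisfying condition (4) together with the sincerity restriction can be sketched as a brute-force search over subsets of the speaker's argument base; this reuses the helper functions sketched in the previous sections, and it approximates the entailment of p with membership of p in the revised belief set, which is a simplification of the paper's ⊢.

# Brute-force sketch of the speaker's search for a persuading set (condition (4)
# plus the sincerity restriction). Entailment of p is approximated by membership
# of p in the revised belief set, which is a simplifying assumption.

from itertools import combinations

def persuading_sets(arg_S, arg_R, prior_R, p):
    """Yield minimal T ⊆ ArgS whose addition makes p enter R's belief set,
    without relying on arguments that R believes but S does not."""
    r_only = set(arg_R) - set(arg_S)
    candidates = sorted(arg_S, key=repr)
    for size in range(1, len(candidates) + 1):
        found = []
        for T in map(set, combinations(candidates, size)):
            if p not in belief_set(*revise(arg_R, prior_R, T, set())):
                continue
            # Sincerity check: removing any argument private to R must not
            # break the derivation of p.
            if any(p not in belief_set(*revise(set(arg_R) - {A}, prior_R, T, set()))
                   for A in r_only):
                continue
            found.append(T)
        if found:  # the smallest successful sets are minimal by construction
            yield from found
            return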
We want to model social interactions among agents which do not necessarily tell the truth or trust each other completely, although they may pretend to. In such a setting, an agent revises its private beliefs only if someone provides an acceptable argument in the sense of Section 2.

Fig. 1. A diagram of the mutual inclusion relations among the belief bases and sets involved in the interaction between S and R.

Thus, while publicly an agent must pretend to be rational, and shall therefore revise its public belief base according to the system discussed in Section 3, nothing forbids an agent from privately following other types of rules, not necessarily rational ones. As a worst-case scenario (from S's standpoint), we assume that R uses a belief revision system based on Galbraith's notion of conventional wisdom (CW), discussed in [2] as a proposal to model the way an irrational (but realistic) agent might revise its private beliefs. The idea is that different sets of arguments S1, . . . , Sn lead to different belief revisions 〈Arg, ⪰Arg〉 ∗ 〈S1, ⪰S1〉, . . . , 〈Arg, ⪰Arg〉 ∗ 〈Sn, ⪰Sn〉. R will privately accept the most appealing revision, i.e., the one induced by the Si which maximizes his preferences according to Galbraith's notion of conventional wisdom. In order to formalize this idea, we have to define an order of appeal on argument bases.

Definition 7. Let Arg1 and Arg2 be two argument bases. Arg1 is more appealing than Arg2 to an agent, with respect to the agent's desire set D, in symbols Arg1 ⪰ Arg2, if and only if G(〈〈Arg1, ⪰Arg1〉, D〉) ⪰ G(〈〈Arg2, ⪰Arg2〉, D〉).

We will denote by • the private, CW-based belief revision operator. Given an acceptable argument set S,

〈ArgR, ⪰ArgR〉 • 〈S, ⪰S〉 ∈ {〈ArgR, ⪰ArgR〉, 〈ArgR, ⪰ArgR〉 ∗ 〈S, ⪰S〉}.

This definition is inspired by indeterministic belief revision [6]: "Most models of belief change are deterministic. Clearly, this is not a realistic feature, but it makes the models much simpler and easier to handle, not least from a computational point of view. In indeterministic belief change, the subjection of a specified belief base to a specified input has more than one admissible outcome. Indeterministic operators can be constructed as sets of deterministic operations. Hence, given n deterministic revision operators ∗1, ∗2, . . . , ∗n, ∗ = {∗1, ∗2, . . . , ∗n} can be used as an indeterministic operator."

We then define the notion of an appealing set of arguments, i.e., one which is preferred by the receiver R to the current state of its beliefs.

Definition 8. Let S be a minimal set of arguments that supports A = 〈H, p〉, such that S defends A and defends itself, as defined in the previous section. Then 〈ArgR, ⪰ArgR〉 • 〈S, ⪰S〉 = 〈ArgR, ⪰ArgR〉 ∗ 〈S, ⪰S〉, i.e., R privately accepts the revision 〈ArgR, ⪰ArgR〉 ∗ 〈S, ⪰S〉, if 〈ArgR, ⪰ArgR〉 ∗ 〈S, ⪰S〉 ⪰ 〈ArgR, ⪰ArgR〉; otherwise, 〈ArgR, ⪰ArgR〉 • 〈S, ⪰S〉 = 〈ArgR, ⪰ArgR〉.

Example 5. The investor of our example desires to invest money. Assuming this is his only desire, we have DR = {im}. Now, the advisor S has two sets of arguments to persuade R that the dollar is weak, namely S1 = {a, c} and S2 = {d, f}. Let us assume that, according to the "planning module" of R,

V(〈〈ArgR, ⪰ArgR〉 ∗ 〈S1, ⪰S1〉, DR〉) = 〈{im}, ∅〉,
V(〈〈ArgR, ⪰ArgR〉 ∗ 〈S2, ⪰S2〉, DR〉) = 〈∅, {im}〉.

Therefore, G(〈〈ArgR, ⪰ArgR〉 ∗ 〈S1, ⪰S1〉, DR〉) ⪰ G(〈〈ArgR, ⪰ArgR〉 ∗ 〈S2, ⪰S2〉, DR〉), because, by revising with S1 = {a, c}, R's desire im is achievable.
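Definitions 7 and 8 can be illustrated by comparing the goal-set couples an agent would select before and after a candidate revision. In the sketch below, the problem-dependent planning function V is left as an assumed parameter `plan`, and the goal selection function G is approximated by applying `plan` to the revised belief set; none of this is code from the paper.

# Sketch of the private, CW-based revision operator • (Definitions 7 and 8): the
# receiver adopts the public revision only if the resulting argument base is at
# least as appealing as the status quo. `plan` stands for the problem-dependent
# function V and is an assumed parameter.

def appeal(args, prior, desires, plan):
    """The couple 〈Da, Dv〉 the agent would select from these beliefs (the role of G)."""
    return plan(belief_set(args, prior), desires)

def cw_revise(arg_R, prior_R, S, prior_S, desires, rank, plan):
    """Return the privately revised base: the public revision if R finds it at
    least as appealing as keeping his current beliefs, the unchanged base otherwise."""
    revised_args, revised_prior = revise(arg_R, prior_R, S, prior_S)
    if pref_D(appeal(revised_args, revised_prior, desires, plan),
              appeal(arg_R, prior_R, desires, plan), rank):
        return revised_args, revised_prior
    return arg_R, prior_R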
A necessary and sufficient condition for the public and private revisions to coincide is thus that the set of arguments S used to persuade an agent is the most appealing one for the addressee, if such a set exists. Since CW-based belief revision is indeterministic and not revising is an alternative, R decides whether to keep the status quo of his beliefs or to adopt the belief revision resulting from the arguments proposed by S. Seen from S's standpoint, the task of persuading R of p thus amounts to comparing R's belief revisions resulting from the different sets of arguments supporting p and acceptable by R, and choosing the set of arguments that appeals most to R. To define the notion of the most appealing set of arguments, we need to extend the order of appeal ⪰ to sets of arguments.

Definition 9. Let S1 and S2 be two sets of arguments that defend themselves; S1 is more appealing to R than S2, in symbols S1 ⪰R S2, if and only if 〈ArgR, ⪰ArgR〉 • 〈S1, ⪰S1〉 ⪰ 〈ArgR, ⪰ArgR〉 • 〈S2, ⪰S2〉.

The most appealing set of arguments S∗p for persuading R of p according to conventional wisdom is, among all minimal sets of arguments S that support an argument A = 〈H, p〉 and that defend A and themselves as defined in Section 5, the one that is maximal with respect to the appeal ordering ⪰R, i.e., such that S∗p ⪰R S for every such S.
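Finally, the speaker's choice of S∗p (Definition 9) can be sketched by ranking the candidate minimal persuading sets by the appeal of the private revision each one induces; this reuses the helper functions assumed in the previous sketches and, like them, is only an illustration, not the paper's algorithm.

# Sketch of the speaker's final choice (Definition 9): among the minimal
# persuading sets, pick one whose induced revision appeals most to R.

def most_appealing_set(arg_S, arg_R, prior_R, p, desires, rank, plan):
    """Return a candidate S*p maximal w.r.t. R's appeal ordering, or None."""
    best, best_appeal = None, None
    for S in persuading_sets(arg_S, arg_R, prior_R, p):
        args, prior = revise(arg_R, prior_R, S, set())
        current = appeal(args, prior, desires, plan)
        if best is None or pref_D(current, best_appeal, rank):
            best, best_appeal = S, current
    return best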